271,443 results for "Research and Analysis Methods"
The structure and dynamics of cities : urban data analysis and theoretical modeling
\"With over half of the world's population now living in urban areas, the ability to model and understand the structure and dynamics of cities is becoming increasingly valuable. Combining new data with tools and concepts from statistical physics and urban economics, this book presents a modern and interdisciplinary perspective on cities and urban systems. Both empirical observations and theoretical approaches are critically reviewed, with particular emphasis placed on derivations of classical models and results, along with analysis of their limits and validity. Key aspects of cities are thoroughly analyzed, including mobility patterns, the impact of multimodality, the coupling between different transportation modes, the evolution of infrastructure networks, spatial and social organisation, and interactions between cities. Drawing upon knowledge and methods from areas of mathematics, physics, economics and geography, the resulting quantitative description of cities will be of interest to all those studying and researching how to model these complex systems\"-- Provided by publisher.
A simple method to assess and report thematic saturation in qualitative research
Data saturation is the most commonly employed concept for estimating sample sizes in qualitative research. Over the past 20 years, scholars using both empirical research and mathematical/statistical models have made significant contributions to the question: How many qualitative interviews are enough? This body of work has advanced the evidence base for sample size estimation in qualitative inquiry during the design phase of a study, prior to data collection, but it does not provide qualitative researchers with a simple and reliable way to determine the adequacy of sample sizes during and/or after data collection. Using the principle of saturation as a foundation, we describe and validate a simple-to-apply method for assessing and reporting on saturation in the context of inductive thematic analyses. Following a review of the empirical research on data saturation and sample size estimation in qualitative research, we propose an alternative way to evaluate saturation that overcomes the shortcomings and challenges associated with existing methods identified in our review. Our approach includes three primary elements in its calculation and assessment: Base Size, Run Length, and New Information Threshold. We additionally propose a more flexible approach to reporting saturation. To validate our method, we use a bootstrapping technique on three existing thematically coded qualitative datasets generated from in-depth interviews. Results from this analysis indicate the method we propose to assess and report on saturation is feasible and congruent with findings from earlier studies.
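The three elements of the calculation map directly onto code. Below is a minimal Python sketch of one plausible reading of the procedure, assuming the new-information proportion for each run is computed against the cumulative theme count accumulated before that run; the function name, input format, and defaults (base size 4, run length 2, 5% threshold) are illustrative, not the authors' implementation.

```python
def saturation_point(new_per_interview, base_size=4, run_length=2, threshold=0.05):
    """Number of interviews at which saturation is reached, or None.

    new_per_interview[i] is the number of themes appearing for the first
    time in interview i (in collection order).
    """
    for i in range(base_size, len(new_per_interview) - run_length + 1):
        cumulative = sum(new_per_interview[:i])             # themes seen so far
        run_new = sum(new_per_interview[i:i + run_length])  # new themes in run
        if cumulative > 0 and run_new / cumulative <= threshold:
            return i + run_length  # saturation reached at the end of this run
    return None  # threshold never met in this dataset

# Hypothetical per-interview theme counts: saturation after 8 interviews.
print(saturation_point([9, 4, 3, 2, 1, 1, 0, 0, 1, 0]))
```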
Intraclass correlation - A discussion and demonstration of basic features
A re-analysis of intraclass correlation (ICC) theory is presented together with Monte Carlo simulations of ICC probability distributions. A partly revised and simplified theory of the single-score ICC is obtained, together with an alternative and simple recipe for its use in reliability studies. Our main practical conclusion is that in the analysis of a reliability study it is neither necessary nor convenient to start from an initial choice of a specified statistical model. Rather, one may impartially use all three single-score ICC formulas. A near equality of the three ICC values indicates the absence of bias (systematic error), in which case the classical (one-way random) ICC may be used. A consistency ICC larger than the absolute agreement ICC indicates the presence of non-negligible bias; if so, the classical ICC is invalid and misleading. An F-test may be used to confirm whether biases are present. From the resulting model (without or with bias), variances and confidence intervals may then be calculated. In the presence of bias, both the absolute agreement ICC and the consistency ICC should be reported, since they give different and complementary information about the reliability of the method. A clinical example with data from the literature is given.
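The "use all three formulas" recipe is easy to apply from the two-way ANOVA mean squares. The Python sketch below computes the one-way random, consistency, and absolute agreement single-score ICCs for an n-subjects by k-raters score matrix, using the standard ANOVA formulas (McGraw and Wong's conventions); it illustrates the comparison the authors recommend rather than reproducing their code.

```python
import numpy as np

def single_score_iccs(x):
    """One-way random, consistency, and absolute-agreement single-score
    ICCs for an (n subjects) x (k raters) score matrix."""
    x = np.asarray(x, dtype=float)
    n, k = x.shape
    grand = x.mean()
    ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()  # between subjects
    ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()  # between raters (bias)
    ss_err = ((x - grand) ** 2).sum() - ss_rows - ss_cols

    msr = ss_rows / (n - 1)                    # subjects mean square
    msc = ss_cols / (k - 1)                    # raters mean square
    mse = ss_err / ((n - 1) * (k - 1))         # residual mean square
    msw = (ss_cols + ss_err) / (n * (k - 1))   # one-way within-subject MS

    icc_1 = (msr - msw) / (msr + (k - 1) * msw)   # classical, one-way random
    icc_c = (msr - mse) / (msr + (k - 1) * mse)   # consistency
    icc_a = (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)  # agreement
    return icc_1, icc_c, icc_a
```

Near-equal values for the three coefficients suggest negligible rater bias; a consistency ICC clearly above the absolute agreement ICC flags a systematic difference between raters.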
Orienting the causal relationship between imprecisely measured traits using GWAS summary data
Inference about the causal structure that induces correlations between two traits can be achieved by combining genetic associations with a mediation-based approach, as is done in the causal inference test (CIT). However, we show that measurement error in the phenotypes can lead the CIT to infer the wrong causal direction, and that increasing sample sizes has the adverse effect of increasing confidence in the wrong answer. This problem is likely to be general to other mediation-based approaches. Here we introduce an extension to Mendelian randomisation, a method that uses genetic associations in an instrumentation framework, which enables inference of the causal direction between traits and offers some advantages. First, it can be performed using only summary-level data from genome-wide association studies; second, it is less susceptible to bias in the presence of measurement error or unmeasured confounding. We apply the method to infer the causal direction between DNA methylation and gene expression levels. Our results demonstrate that, in general, DNA methylation is more likely to be the causal factor, but this result is highly susceptible to bias induced by systematic differences in measurement error between the platforms, and by horizontal pleiotropy. We emphasise that, where possible, implementing MR and appropriate sensitivity analyses alongside other approaches such as CIT is important to triangulate reliable conclusions about causality.
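For readers unfamiliar with summary-data Mendelian randomisation, the basic instrumentation step is compact. The Python sketch below combines per-variant Wald ratios with inverse-variance weights (first-order standard errors); it illustrates only the core estimator, not the authors' directionality procedure, which additionally compares how much variance each instrument explains in the two traits.

```python
import numpy as np

def ivw_mr(beta_exp, beta_out, se_out):
    """Inverse-variance-weighted MR estimate from GWAS summary statistics.

    beta_exp: SNP-exposure effects; beta_out, se_out: SNP-outcome effects
    and their standard errors, for the same instruments.
    """
    beta_exp, beta_out, se_out = map(np.asarray, (beta_exp, beta_out, se_out))
    ratio = beta_out / beta_exp              # per-variant Wald ratios
    se_ratio = se_out / np.abs(beta_exp)     # first-order standard errors
    w = 1.0 / se_ratio ** 2                  # inverse-variance weights
    estimate = (w * ratio).sum() / w.sum()
    se = np.sqrt(1.0 / w.sum())
    return estimate, se
```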
Temperature effects on the calculation of the functional derivative of Tc with respect to α²F(ω)
The functional derivative of the superconducting transition temperature Tc with respect to the electron-phonon coupling function α²F(ω) identifies the frequency regions where phonons are most effective in raising Tc. This work presents an analysis of temperature effects on the calculation of δTc/δα²F(ω) and the μ* parameter. The results indicate that the temperature dependence of δTc/δα²F(ω) and μ* reveals patterns and conditions that are possibly related to the physical conditions in the superconducting state, with implications for the theoretical estimation of Tc.
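For context, the quantity under study is defined by the first-order response of Tc to a perturbation of the spectral function, alongside the usual electron-phonon coupling strength; these are standard Eliashberg-theory relations assumed here, not quoted from the abstract.

```latex
\delta T_c = \int_0^{\infty} \frac{\delta T_c}{\delta \alpha^2 F(\omega)}\,
             \delta \alpha^2 F(\omega)\, d\omega ,
\qquad
\lambda = 2 \int_0^{\infty} \frac{\alpha^2 F(\omega)}{\omega}\, d\omega .
```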
mixOmics: An R package for 'omics feature selection and multiple data integration
The advent of high-throughput technologies has led to a wealth of publicly available 'omics data coming from different sources, such as transcriptomics, proteomics, and metabolomics. Combining such large-scale biological data sets can lead to the discovery of important biological insights, provided that relevant information can be extracted in a holistic manner. Current statistical approaches have focused on identifying small subsets of molecules (a 'molecular signature') to explain or predict biological conditions, but mainly for a single type of 'omics. In addition, commonly used methods are univariate and consider each biological feature independently. We introduce mixOmics, an R package dedicated to the multivariate analysis of biological data sets with a specific focus on data exploration, dimension reduction and visualisation. By adopting a systems biology approach, the toolkit provides a wide range of methods that statistically integrate several data sets at once to probe relationships between heterogeneous 'omics data sets. Our recent methods extend Projection to Latent Structure (PLS) models for discriminant analysis, for data integration across multiple 'omics data or across independent studies, and for the identification of molecular signatures. We illustrate our latest mixOmics integrative frameworks for the multivariate analyses of 'omics data available from the package.
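mixOmics itself is an R package; as a language-neutral illustration of the PLS idea it builds on, the Python sketch below projects two 'omics blocks measured on the same samples onto shared latent components with scikit-learn. The data are random and the dimensions are placeholders; this is not the mixOmics API.

```python
import numpy as np
from sklearn.cross_decomposition import PLSCanonical

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 200))  # e.g. 30 samples x 200 transcripts
Y = rng.normal(size=(30, 50))   # the same 30 samples x 50 proteins

pls = PLSCanonical(n_components=2)
x_scores, y_scores = pls.fit_transform(X, Y)

# Correlation of the first pair of latent components: the shared
# structure onto which the two blocks are projected.
print(np.corrcoef(x_scores[:, 0], y_scores[:, 0])[0, 1])
```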
Transcriptomics technologies
Transcriptomics technologies are the techniques used to study an organism's transcriptome, the sum of all of its RNA transcripts. The information content of an organism is recorded in the DNA of its genome and expressed through transcription. Here, mRNA serves as a transient intermediary molecule in the information network, whilst noncoding RNAs perform additional diverse functions. A transcriptome captures a snapshot in time of the total transcripts present in a cell. The first attempts to study the whole transcriptome began in the early 1990s, and technological advances since the late 1990s have made transcriptomics a widespread discipline. Transcriptomics has been defined by repeated technological innovations that transform the field. There are two key contemporary techniques in the field: microarrays, which quantify a set of predetermined sequences, and RNA sequencing (RNA-Seq), which uses high-throughput sequencing to capture all sequences. Measuring the expression of an organism's genes in different tissues, conditions, or time points gives information on how genes are regulated and reveals details of an organism's biology. It can also help to infer the functions of previously unannotated genes. Transcriptomic analysis has enabled the study of how gene expression changes in different organisms and has been instrumental in the understanding of human disease. An analysis of gene expression in its entirety allows detection of broad coordinated trends which cannot be discerned by more targeted assays.
Unicycler: Resolving bacterial genome assemblies from short and long sequencing reads
The Illumina DNA sequencing platform generates accurate but short reads, which can be used to produce accurate but fragmented genome assemblies. Pacific Biosciences and Oxford Nanopore Technologies DNA sequencing platforms generate long reads that can produce complete genome assemblies, but the sequencing is more expensive and error-prone. There is significant interest in combining data from these complementary sequencing technologies to generate more accurate "hybrid" assemblies. However, few tools exist that truly leverage the benefits of both types of data, namely the accuracy of short reads and the structural resolving power of long reads. Here we present Unicycler, a new tool for assembling bacterial genomes from a combination of short and long reads, which produces assemblies that are accurate, complete and cost-effective. Unicycler builds an initial assembly graph from short reads using the de novo assembler SPAdes and then simplifies the graph using information from short and long reads. Unicycler uses a novel semi-global aligner to align long reads to the assembly graph. Tests on both synthetic and real reads show Unicycler can assemble larger contigs with fewer misassemblies than other hybrid assemblers, even when long-read depth and accuracy are low. Unicycler is open source (GPLv3) and available at github.com/rrwick/Unicycler.
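Running a hybrid assembly requires no more than pointing the tool at both read sets. A minimal invocation, wrapped in Python here for consistency with the other sketches, follows the pattern in the Unicycler documentation; the file names are placeholders.

```python
import subprocess

# Hybrid assembly: paired-end short reads (-1/-2) plus long reads (-l).
subprocess.run(
    [
        "unicycler",
        "-1", "short_reads_1.fastq.gz",  # Illumina R1 (placeholder)
        "-2", "short_reads_2.fastq.gz",  # Illumina R2 (placeholder)
        "-l", "long_reads.fastq.gz",     # PacBio or Nanopore reads
        "-o", "assembly_out",            # output directory
    ],
    check=True,
)
```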
The PRISMA 2020 statement: An updated guideline for reporting systematic reviews
Matthew Page and co-authors describe PRISMA 2020, an updated reporting guideline for systematic reviews and meta-analyses.
OpenMM 7: Rapid development of high performance algorithms for molecular dynamics
OpenMM is a molecular dynamics simulation toolkit with a unique focus on extensibility. It allows users to easily add new features, including forces with novel functional forms, new integration algorithms, and new simulation protocols. Those features automatically work on all supported hardware types (including both CPUs and GPUs) and perform well on all of them. In many cases they require minimal coding, just a mathematical description of the desired function. They also require no modification to OpenMM itself and can be distributed independently of OpenMM. This makes it an ideal tool for researchers developing new simulation methods, and also allows those new methods to be immediately available to the larger community.
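The "mathematical description only" workflow looks like this in OpenMM's Python API. The sketch below defines a flat-bottom restraint as a custom bonded force from an energy expression alone; the two-particle system, expression, and parameter values are illustrative.

```python
import openmm  # in OpenMM 7.x this module is also importable as simtk.openmm

# A custom force defined purely by an energy expression:
# zero inside r0, harmonic beyond it (a flat-bottom restraint).
force = openmm.CustomBondForce("step(r - r0) * 0.5 * k * (r - r0)^2")
force.addPerBondParameter("r0")  # flat-bottom radius (nm)
force.addPerBondParameter("k")   # stiffness (kJ/mol/nm^2)
force.addBond(0, 1, [0.3, 1000.0])  # restrain particles 0 and 1

# Minimal two-particle system carrying the custom force; it runs on any
# supported platform (CPU or GPU) without further changes.
system = openmm.System()
system.addParticle(16.0)  # masses in amu (placeholder values)
system.addParticle(16.0)
system.addForce(force)
```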